
    Clustering-Based Predictive Process Monitoring

    Business process enactment is generally supported by information systems that record data about process executions, which can be extracted as event logs. Predictive process monitoring is concerned with exploiting such event logs to predict how running (uncompleted) cases will unfold up to their completion. In this paper, we propose a predictive process monitoring framework for estimating the probability that a given predicate will be fulfilled upon completion of a running case. The predicate can be, for example, a temporal logic constraint or a time constraint, or any predicate that can be evaluated over a completed trace. The framework takes into account both the sequence of events observed in the current trace and the data attributes associated with these events. The prediction problem is approached in two phases. First, prefixes of previous traces are clustered according to control flow information. Second, a classifier is built for each cluster using event data to discriminate between fulfillments and violations. At runtime, a prediction is made for a running case by mapping it to a cluster and applying the corresponding classifier. The framework has been implemented in the ProM toolset and validated on a log pertaining to the treatment of cancer patients in a large hospital.
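
    The two-phase scheme lends itself to a compact sketch. The following Python fragment is a minimal illustration, assuming prefixes have already been encoded as control-flow vectors and event-data attribute vectors (NumPy arrays); the function names and the scikit-learn components are stand-ins, not the ProM implementation.

```python
# Phase 1: cluster prefixes by control flow. Phase 2: one classifier per
# cluster, trained on event-data attributes. Illustrative sketch only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def train(control_flow_vecs, data_vecs, outcomes, n_clusters=3):
    """control_flow_vecs: prefix encodings (e.g. activity frequencies);
    data_vecs: event-data attributes; outcomes: 1 = predicate fulfilled."""
    clusterer = KMeans(n_clusters=n_clusters, n_init=10).fit(control_flow_vecs)
    classifiers = {}
    for c in range(n_clusters):
        idx = clusterer.labels_ == c
        # Assumes both outcomes occur in each cluster.
        classifiers[c] = RandomForestClassifier().fit(data_vecs[idx], outcomes[idx])
    return clusterer, classifiers

def predict(clusterer, classifiers, cf_vec, data_vec):
    # Map the running case to its cluster, then ask that cluster's classifier
    # for the probability that the predicate will be fulfilled.
    c = clusterer.predict(cf_vec.reshape(1, -1))[0]
    return classifiers[c].predict_proba(data_vec.reshape(1, -1))[0, 1]
```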

    Incremental Predictive Process Monitoring: How to Deal with the Variability of Real Environments

    A characteristic of existing predictive process monitoring techniques is to first construct a predictive model based on past process executions, and then use it to predict the future of new ongoing cases, without the possibility of updating it with new cases when they complete their execution. This can make predictive process monitoring too rigid to deal with the variability of processes working in real environments that continuously evolve and/or exhibit new variant behaviors over time. As a solution to this problem, we propose the use of algorithms that allow the incremental construction of the predictive model. These incremental learning algorithms update the model whenever new cases become available, so that the predictive model evolves over time to fit the current circumstances. The algorithms have been implemented using different case encoding strategies and evaluated on a number of real and synthetic datasets. The results provide first evidence of the potential of incremental learning strategies for predictive process monitoring in real environments, and of the impact of different case encoding strategies in this setting.
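
    A minimal sketch of the incremental idea, assuming completed cases arrive already encoded as feature vectors; SGDClassifier is a stand-in for the incremental learners evaluated in the paper, not the paper's own implementation.

```python
# Update the model as completed cases arrive, instead of training once on a
# frozen log. partial_fit refits incrementally, so the predictor keeps
# tracking drifting or new variant behaviour over time.
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")
CLASSES = [0, 1]  # violation / fulfilment

def on_case_completed(encoded_case, outcome):
    # One gradient step on the new case only; the model is never rebuilt.
    model.partial_fit([encoded_case], [outcome], classes=CLASSES)

def predict_running_case(encoded_prefix):
    # Valid once at least one case per class has been seen.
    return model.predict([encoded_prefix])[0]
```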

    Evaluating Wiki Collaborative Features in Ontology Authoring (Extended abstract)

    Abstract: This extended abstract summarizes a rigorous investigation of the impact of wiki collaborative functionalities on ontology modelling; the full work has been published elsewhere. Good quality ontology modelling often demands multiple competencies and skills, which are difficult to find in a single person. This results in the need to involve more actors, possibly with different roles and expertise, collaborating towards the construction of the ontology. Collaborative ontology authoring has recently been widely investigated in the literature. A first requirement deals with the collaboration between who knows the domain to be modelled, i.e., the Domain Expert (DE), and who has the technical skills to formalize it, i.e., the Knowledge Engineer (KE). Traditional methodologies and tools were mainly based on the idea that knowledge engineers should drive the modelling process (producing ontologies in a formalism that is usually not understandable to domain experts) and that domain experts should only report their knowledge of the domain to KEs. However, these methodologies often create an unnecessary extra layer of indirection, an imbalance between the two roles, and the impossibility for domain experts to understand the modelled ontology. DEs should be actively involved in the ontology modelling process rather than only provide domain knowledge to KEs. A second important requirement deals with the support of distributed teams of actors. Independently of their geographical position or their role, team members should be made aware of the collaborative development of the modelled artefacts, and should be supported in the communication of modelling choices as well as in the coordination of work. Wiki tools for ontology authoring offer an appealing option for tackling these collaborative aspects, since wikis usually provide such collaborative features.

    Explain, Adapt and Retrain: How to improve the accuracy of a PPM classifier through different explanation styles

    Recent papers have introduced a novel approach to explain why a Predictive Process Monitoring (PPM) model for outcome-oriented predictions provides wrong predictions. Moreover, they have shown how to exploit the explanations, obtained using state-of-the-art post-hoc explainers, to identify in a semi-automated way the most common features that induce a predictor to make mistakes and, in turn, to reduce the impact of those features and increase the accuracy of the predictive model. This work starts from the assumption that frequent control flow patterns in event logs may represent important features that characterize, and therefore explain, a certain prediction. In this paper, we therefore (i) employ a novel encoding able to leverage DECLARE constraints in Predictive Process Monitoring and compare its effectiveness with that of state-of-the-art Predictive Process Monitoring encodings, in particular for the task of outcome-oriented predictions; (ii) introduce a completely automated pipeline for the identification of the most common features inducing a predictor to make mistakes; and (iii) show the effectiveness of the proposed pipeline in increasing the accuracy of the predictive model by validating it on different real-life datasets.
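
    As an illustration of what a DECLARE-based encoding could look like, the following sketch turns a trace (a sequence of activity labels) into boolean features, one per constraint. The two template checkers are simplified stand-ins for a full LTLf evaluation, and the activity names are invented.

```python
# One feature per DECLARE constraint: 1 if the trace satisfies it, else 0.
def existence(act):
    # existence(a): activity a occurs at least once in the trace.
    return lambda trace: int(act in trace)

def response(a, b):
    # response(a, b): every occurrence of a is eventually followed by b.
    def check(trace):
        return int(all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a))
    return check

constraints = [existence("pay"), response("order", "ship")]

def encode(trace):
    return [c(trace) for c in constraints]

print(encode(["order", "pay", "ship"]))    # [1, 1]
print(encode(["order", "ship", "order"]))  # [0, 0]
```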

    Outcome-Oriented Prescriptive Process Monitoring Based on Temporal Logic Patterns

    Prescriptive Process Monitoring systems recommend, during the execution of a business process, interventions that, if followed, prevent a negative outcome of the process. Such interventions have to be reliable, that is, they have to guarantee the achievement of the desired outcome or performance, and they have to be flexible, that is, they have to avoid overturning the normal process execution or forcing the execution of a given activity. Most existing Prescriptive Process Monitoring solutions, however, while performing well in terms of recommendation reliability, provide users with very specific (sequences of) activities to be executed, without considering the feasibility of these recommendations. To address this issue, we propose a new Outcome-Oriented Prescriptive Process Monitoring system that recommends temporal relations between activities to be guaranteed during the process execution in order to achieve a desired outcome. This softens the mandatory execution of an activity at a given point in time, thus leaving the user more freedom in deciding which interventions to put in place. Our approach defines these temporal relations with Linear Temporal Logic over finite traces (LTLf) patterns, which are used as features to describe the historical process data recorded in an event log by the information systems supporting the execution of the process. The encoded log is used to train a Machine Learning classifier to learn a mapping between the temporal patterns and the outcome of a process execution. The classifier is then queried at runtime to return, as recommendations, the most salient temporal patterns to be satisfied in order to maximize the likelihood of a certain outcome for an input ongoing process execution. The proposed system is assessed using a pool of 22 real-life event logs that have already been used as a benchmark in the Process Mining community.
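
    The runtime recommendation step can be sketched as follows, under the assumption of a linear classifier over LTLf-pattern features, so that each pattern's contribution to the outcome is an explicit weight; the pattern names and weights below are illustrative, not taken from the paper.

```python
# Rank the patterns a running case has not yet satisfied by how much they
# push the prediction toward the positive outcome (positive weight = good).
import numpy as np

def recommend(coef, feature_names, current_encoding, top_k=3):
    """coef: weights of a trained linear classifier;
    current_encoding: 0/1 vector of patterns the ongoing case satisfies."""
    gains = [(w, name) for w, sat, name in zip(coef, current_encoding, feature_names)
             if sat == 0 and w > 0]  # unsatisfied patterns with positive weight
    return [name for _, name in sorted(gains, reverse=True)[:top_k]]

coef = np.array([1.4, -0.2, 0.8])
names = ["response(order, ship)", "existence(cancel)", "precedence(check, pay)"]
print(recommend(coef, names, [0, 0, 1]))  # ['response(order, ship)']
```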

    Genetic algorithms for hyperparameter optimization in predictive business process monitoring

    Predictive business process monitoring exploits event logs to predict how ongoing (uncompleted) traces will unfold up to their completion. A predictive process monitoring framework collects a range of techniques that allow users to get accurate predictions about the achievement of a goal for a given ongoing trace. These techniques can be combined, and their parameters configured, in different framework instances. Unfortunately, a unique framework instance that is general enough to outperform the others for every dataset, goal or type of prediction is elusive. Thus, a suitable framework instance needs to be selected and configured for each given dataset. This paper presents a predictive process monitoring framework armed with a hyperparameter optimization method to select a suitable framework instance for a given dataset.
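
    A toy genetic algorithm over a small hyperparameter space conveys the idea; the search space and the `evaluate` callback (train and validate one framework instance, return its accuracy) are placeholders invented for illustration.

```python
# Evolve framework configurations: truncation selection, uniform crossover,
# single-gene mutation. `evaluate(individual) -> fitness` is supplied by the user.
import random

SPACE = {"n_clusters": [2, 3, 5, 8],
         "classifier": ["rf", "dt"],
         "encoding": ["freq", "index"]}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(ind):
    child = dict(ind)
    k = random.choice(list(SPACE))
    child[k] = random.choice(SPACE[k])
    return child

def ga(evaluate, pop_size=10, generations=20):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)   # fittest first
        parents = pop[: pop_size // 2]         # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=evaluate)
```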

    Process Discovery on Deviant Traces and Other Stranger Things

    As the need to understand and formalise business processes into a model has grown over the last years, the process discovery research field has gained more and more importance, developing two different classes of approaches to model representation: procedural and declarative. Orthogonally to this classification, the vast majority of works envisage the discovery task as a one-class supervised learning process guided by the traces that are recorded into an input log. In this work, instead, we focus on declarative processes and embrace the less popular view of process discovery as a binary supervised learning task, where the input log reports both examples of the normal system execution and traces representing a “stranger” behaviour according to the domain semantics. We therefore investigate in depth how the valuable information carried by these two sets can be extracted and formalised into a model that is “optimal” according to user-defined goals. Our approach, named NegDis, is evaluated against other relevant works in this field, and shows promising results regarding both the performance and the quality of the obtained solutions.
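
    The gist of the binary-supervised view can be sketched as a greedy cover: keep only constraints that accept every positive trace, then pick those that reject the most negatives. This is an illustration of the idea under simplifying assumptions, not the NegDis algorithm itself; constraint candidates are assumed to be callables from traces to booleans (e.g. DECLARE templates).

```python
# Greedy binary-supervised discovery sketch: accept all positives, reject
# as many negatives as possible with few constraints.
def discover(candidates, positives, negatives):
    # A constraint is admissible if no positive trace violates it.
    admissible = [c for c in candidates if all(c(t) for t in positives)]
    model, uncovered = [], set(range(len(negatives)))
    while uncovered:
        # Constraint rejecting the most still-uncovered negative traces.
        best = max(admissible,
                   key=lambda c: sum(1 for i in uncovered if not c(negatives[i])),
                   default=None)
        rejected = {i for i in uncovered if best and not best(negatives[i])}
        if not rejected:
            break  # remaining negatives cannot be excluded by any candidate
        model.append(best)
        uncovered -= rejected
    return model
```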

    Discovering Business Process models expressed as DNF or CNF formulae of Declare constraints

    In the field of Business Process Management, the Process Discovery task is one of the most important and most researched topics. It aims to automatically learn process models starting from a given set of logged execution traces. The majority of approaches employ procedural languages for describing the discovered models, but declarative languages have been proposed as well. To the latter category belongs the Declare language, which is based on the notion of constraint and equipped with a formal semantics in LTLf. It is also quite common in the field to consider the log as a set of positive examples only, but some recent approaches have pointed out that a binary classification task (with positive and negative examples) might provide better outcomes. In this paper, we discuss our preliminary work on adapting some existing algorithms for Inductive Logic Programming to the specific setting of Process Discovery: in particular, we adopt the Declare language with its formal semantics, and the perspective of a binary classification task (i.e., with positive and negative examples).
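
    For illustration, the semantics of the discovered models is straightforward to state in code, treating each Declare constraint as a checker from traces to booleans; this is a sketch of how such formulae classify traces, not of the discovery algorithm.

```python
# A CNF model is a conjunction of disjunctions of constraint checkers;
# a DNF model is a disjunction of conjunctions. A trace is a positive
# example iff the formula accepts it.
def cnf_accepts(cnf, trace):
    # cnf: list of clauses, each clause a list of checkers (trace -> bool).
    return all(any(c(trace) for c in clause) for clause in cnf)

def dnf_accepts(dnf, trace):
    # dnf: list of conjunctions of checkers (trace -> bool).
    return any(all(c(trace) for c in conj) for conj in dnf)
```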

    Semantic annotation of business process models

    In the last decades, business process models have increasingly been used by companies for different purposes, such as documenting enacted processes or enabling and improving communication among stakeholders (e.g., designers and implementers). Aside from the differences, all the roles played by process models involve human actors (e.g., business designers, business analysts, re-engineers) and hence demand readability and ease of use, beyond correctness and reasonable completeness. It often happens, however, that process models are large and intricate, making them potentially difficult to understand and to manage. In this thesis we propose techniques aimed at supporting business designers and analysts in the management of business process models. The core of the proposal is the enrichment of process models with semantic annotations from domain ontologies and the formalization of both structural and domain information in a shared knowledge base, thus opening up the possibility of exploiting reasoning to support business experts in their work. In detail, this thesis investigates some of the services that can be provided on top of the semantic annotation of processes, such as the automatic verification of process constraints, the automated querying of process models, and the semi-automatic mining, documentation and modularization of crosscutting concerns. Moreover, special care is devoted to supporting designers and analysts when process models are not available or still have to be semantically annotated. Specifically, an approach for recovering process models from (Web) applications and some metrics for evaluating the understandability of the recovered models are investigated. Techniques for suggesting candidate semantic annotations are also proposed. The results obtained by applying the presented techniques have been validated by means of case studies, performance evaluations and empirical investigations.
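
    As a minimal illustration of the kind of querying such annotations enable, the following sketch links process activities to ontology concepts in an RDF graph and retrieves them with SPARQL; all namespaces, triples, and names are invented for the example.

```python
# Annotate activities with domain concepts, then query by concept.
from rdflib import Graph, Namespace

BP = Namespace("http://example.org/process#")
DOM = Namespace("http://example.org/domain#")

g = Graph()
g.add((BP.CheckInvoice, BP.annotatedWith, DOM.Invoice))
g.add((BP.PayInvoice, BP.annotatedWith, DOM.Invoice))
g.add((BP.ShipGoods, BP.annotatedWith, DOM.Delivery))

q = """SELECT ?a WHERE { ?a <http://example.org/process#annotatedWith>
                             <http://example.org/domain#Invoice> }"""
for row in g.query(q):
    print(row.a)  # activities annotated with the Invoice concept
```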